
    Being Together: Dietrich Bonhoeffer on Human Being and Theological Ethics

    Augustine contra Cicero: evaluation, affirmation, and the freedom of the will

    Hidden vortex lattices in a thermally paired superfluid

    We study the evolution of the rotational response of a hydrodynamic model of a two-component superfluid with a non-dissipative drag interaction as the system undergoes a transition into a paired phase at finite temperature. The transition manifests itself in a change of (i) the vortex lattice symmetry and (ii) the nature of the vortex state. Instead of a vortex lattice, the system forms a highly disordered tangle which constantly undergoes merger and reconnection processes involving different types of vortices, with a "hidden" breakdown of translational symmetry.
    Comment: 4 pages, 5 figs. Submitted to Physical Review. Online suppl. material available; Ref. 6. V2: Fig. 1 re-sent, URL in Ref. 6 corrected.
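
    The abstract does not spell out the model, but a non-dissipative drag between two superfluid components (often referred to as the Andreev-Bashkin effect) is commonly written as a cross term in the two-component kinetic energy. A minimal sketch of that standard form, not taken from the paper itself:

        E_{\mathrm{kin}} = \tfrac{1}{2}\rho_1 \mathbf{v}_1^{\,2} + \tfrac{1}{2}\rho_2 \mathbf{v}_2^{\,2} + \rho_d\, \mathbf{v}_1 \cdot \mathbf{v}_2

    Here \mathbf{v}_{1,2} are the superfluid velocities and \rho_d is the drag density; a nonzero \rho_d means that circulation in one component induces superflow in the other, which is what couples the vortices of the two components in a paired phase.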

    Incorporating Side Information in Probabilistic Matrix Factorization with Gaussian Processes

    Probabilistic matrix factorization (PMF) is a powerful method for modeling data associated with pairwise relationships, finding use in collaborative filtering, computational biology, and document analysis, among other areas. In many domains, there is additional information that can assist in prediction. For example, when modeling movie ratings, we might know when the rating occurred, where the user lives, or what actors appear in the movie. It is difficult, however, to incorporate this side information into the PMF model. We propose a framework for incorporating side information by coupling together multiple PMF problems via Gaussian process priors. We replace scalar latent features with functions that vary over the space of side information. The GP priors on these functions require them to vary smoothly and to share information. We successfully use this new method to predict the scores of professional basketball games, where side information about the venue and date of the game is relevant for the outcome.
    Comment: 18 pages, 4 figures. Submitted to UAI 201
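
    A minimal sketch of the idea described above (not the authors' code): each scalar latent feature becomes a GP-distributed function of the side information, and a rating is predicted by the inner product of the user and item feature functions evaluated at the relevant side-information value. The kernel choice, sizes, and variable names below are illustrative assumptions.

        import numpy as np

        def rbf_kernel(x, y, lengthscale=1.0, variance=1.0):
            """Squared-exponential kernel over side information (e.g. game dates)."""
            d = x[:, None] - y[None, :]
            return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

        rng = np.random.default_rng(0)
        n_users, n_items, n_factors = 5, 4, 2
        side = np.linspace(0.0, 1.0, 6)   # hypothetical grid of side-information values

        K = rbf_kernel(side, side) + 1e-6 * np.eye(len(side))
        L = np.linalg.cholesky(K)

        # Each latent feature is a smooth function of the side information:
        # one GP draw per (user, factor) and per (item, factor), instead of a scalar.
        U = np.einsum('st,ukt->uks', L, rng.standard_normal((n_users, n_factors, len(side))))
        V = np.einsum('st,ikt->iks', L, rng.standard_normal((n_items, n_factors, len(side))))

        def predict(u, i, s):
            """Predicted rating for user u and item i at side-information index s."""
            return U[u, :, s] @ V[i, :, s]

        print(predict(0, 1, 3))

    The GP prior is what lets ratings observed under one side-information value inform predictions at nearby values, which is the coupling the abstract describes.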

    Measured acoustic properties of variable and low density bulk absorbers

    Experimental data were taken to determine the acoustic absorbing properties of uniform low density and layered variable density samples using a bulk absorber with a perforated plate facing to hold the material in place. In the layered variable density case, the bulk absorber was packed such that the lowest density layer began at the surface of the sample and progressed to higher density layers deeper inside. The samples were placed in a rectangular duct and measurements were taken using the two-microphone method. The data were used to calculate specific acoustic impedances and normal incidence absorption coefficients. Results showed that for uniform density samples the absorption coefficient at low frequencies decreased with increasing density, and resonances occurred in the absorption coefficient curve at lower densities. These results were confirmed by a model for uniform density bulk absorbers. Results from layered variable density samples showed that low frequency absorption was highest when the lowest density possible was packed in the first layer near the exposed surface. The layers of increasing density within the sample had the effect of damping the resonances.
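
    The two-microphone (transfer-function) method mentioned above recovers the reflection coefficient, and from it the normal-incidence absorption coefficient and specific impedance, from the complex transfer function between the two microphones. A sketch of that standard calculation, assuming the usual transfer-function formulation; the function name, distances, and example numbers are illustrative, not taken from the report.

        import numpy as np

        def absorption_two_mic(H12, f, x1, s, c=343.0):
            """Normal-incidence absorption from the two-microphone transfer function.

            H12 : complex transfer function p2/p1 between the two microphones
            f   : frequency in Hz
            x1  : distance (m) from the sample face to the microphone farther from it
            s   : microphone spacing (m)
            """
            k = 2 * np.pi * f / c                                   # wavenumber
            R = (H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12) * np.exp(2j * k * x1)
            z = (1 + R) / (1 - R)                                   # normalized specific impedance
            alpha = 1 - np.abs(R) ** 2                              # absorption coefficient
            return alpha, z

        # Example with made-up numbers: a partly absorbing sample at 500 Hz.
        alpha, z = absorption_two_mic(H12=0.6 - 0.3j, f=500.0, x1=0.10, s=0.03)
        print(alpha, z)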

    Training Restricted Boltzmann Machines on Word Observations

    The restricted Boltzmann machine (RBM) is a flexible tool for modeling complex data; however, there have been significant computational difficulties in using RBMs to model high-dimensional multinomial observations. In natural language processing applications, words are naturally modeled by K-ary discrete distributions, where K is determined by the vocabulary size and can easily be in the hundreds of thousands. The conventional approach to training RBMs on word observations is limited because it requires sampling the states of K-way softmax visible units during block Gibbs updates, an operation that takes time linear in K. In this work, we address this issue by employing a more general class of Markov chain Monte Carlo operators on the visible units, yielding updates with computational complexity independent of K. We demonstrate the success of our approach by training RBMs on hundreds of millions of word n-grams using larger vocabularies than previously feasible, and by using the learned features to improve performance on chunking and sentiment classification tasks, achieving state-of-the-art results on the latter.
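
    A minimal sketch of the kind of Markov chain Monte Carlo update the abstract alludes to: a Metropolis-Hastings step whose proposals come from a fixed distribution, so the per-step cost does not grow with the vocabulary size K (with an alias table the proposal draw is O(1)). The variable names, uniform proposal, and sizes below are illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        K, H = 100_000, 50                      # vocabulary size, number of hidden units
        W = 0.01 * rng.standard_normal((K, H))  # weights from one softmax (one-hot) visible unit
        b = np.zeros(K)                         # visible biases
        q = np.full(K, 1.0 / K)                 # fixed proposal, e.g. a unigram distribution

        def score(w, h):
            """Unnormalized log-probability of word w given hidden vector h."""
            return b[w] + W[w] @ h

        def mh_word_update(w, h, n_steps=5):
            """Metropolis-Hastings update of one softmax visible unit.

            Each step draws a proposal from the fixed distribution q and accepts
            or rejects it by comparing only two words, instead of normalizing
            over all K words as exact block Gibbs sampling would.
            """
            for _ in range(n_steps):
                w_new = rng.choice(K, p=q)
                log_accept = (score(w_new, h) - score(w, h)
                              + np.log(q[w]) - np.log(q[w_new]))
                if np.log(rng.random()) < log_accept:
                    w = w_new
            return w

        h = (rng.random(H) < 0.5).astype(float)  # hypothetical hidden sample
        print(mh_word_update(w=42, h=h))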